AIbase

# Multi-instruction fine-tuning

Qwen2.5 14B YOYO V2
Qwen2.5-14B-YOYO-V5 is an enhanced version based on the Qwen2.5-14B foundation model, created by merging multiple pre-trained language models.
Tags: Large Language Model, Transformers
Author: YOYO-AI
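The description above mentions that the model was created by merging multiple pre-trained language models. A minimal sketch of the simplest such technique, linear weight averaging, is shown below; the actual YOYO merge recipe is not documented here, and the tiny state dicts of plain floats are purely illustrative.

```python
# Minimal sketch of linear model merging (weight averaging), one common way
# to combine multiple pre-trained checkpoints. Real merges operate on full
# tensor state dicts; plain floats stand in for tensors here.

def merge_state_dicts(state_dicts, weights=None):
    """Average parameters across models, optionally with per-model weights."""
    if weights is None:
        weights = [1.0 / len(state_dicts)] * len(state_dicts)
    merged = {}
    for key in state_dicts[0]:
        merged[key] = sum(w * sd[key] for w, sd in zip(weights, state_dicts))
    return merged

model_a = {"layer.weight": 0.2, "layer.bias": 0.0}
model_b = {"layer.weight": 0.6, "layer.bias": 0.4}
merged = merge_state_dicts([model_a, model_b])
```

With equal weights this yields the element-wise mean of the two checkpoints; passing explicit `weights` lets one parent model dominate the merge.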
Tulu 65B
Tulu 65B is a 65B-parameter LLaMA model fine-tuned on multi-instruction datasets, representing the outcome of open-resource instruction-tuning research with robust overall performance.
Tags: Large Language Model, Transformers, English
Author: allenai
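Multi-instruction fine-tuning, as described above, trains one model on a mixture of instruction datasets rendered into a shared prompt template. The sketch below shows that data-preparation step; the template and field names are illustrative assumptions, not Tulu's actual schema.

```python
# Minimal sketch of mixing heterogeneous instruction datasets into a single
# training corpus with one shared prompt template (illustrative format only).

TEMPLATE = "### Instruction:\n{instruction}\n\n### Response:\n{response}"

def format_example(example):
    """Render one instruction/response pair with the shared template."""
    return TEMPLATE.format(**example)

def mix_datasets(*datasets):
    """Interleave examples from several instruction datasets round-robin."""
    mixed = []
    for group in zip(*datasets):  # truncates to the shortest dataset
        mixed.extend(group)
    return [format_example(ex) for ex in mixed]

# Two hypothetical single-example datasets standing in for real corpora.
flan_style = [{"instruction": "Translate 'bonjour' to English.", "response": "Hello."}]
dolly_style = [{"instruction": "Name a prime number.", "response": "7"}]
corpus = mix_datasets(flan_style, dolly_style)
```

The resulting `corpus` strings would then be tokenized and fed to a standard causal-LM fine-tuning loop.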
© 2025 AIbase